Preoperative neutrophil-lymphocyte ratio and outcome from coronary artery bypass grafting
Background: An elevated preoperative white blood cell count has been associated with a worse outcome after coronary artery bypass grafting (CABG). Leukocyte subtypes, and particularly the neutrophil-lymphocyte (N/L) ratio, may, however, convey superior prognostic information. We hypothesized that the N/L ratio would predict the outcome of patients undergoing surgical revascularization. Methods: Baseline clinical details were obtained prospectively in 1938 patients undergoing CABG. The differential leukocyte count was measured before surgery, and patients were followed up for 3.6 years. The primary end point was all-cause mortality. Results: The preoperative N/L ratio was a powerful univariable predictor of mortality (hazard ratio [HR] 1.13 per unit, P 3.36). Conclusion: An elevated N/L ratio is associated with poorer survival after CABG. This prognostic utility is independent of other recognized risk factors.
Adaptive foveated single-pixel imaging with dynamic super-sampling
As an alternative to conventional multi-pixel cameras, single-pixel cameras
enable images to be recorded using a single detector that measures the
correlations between the scene and a set of patterns. However, to fully sample
a scene in this way requires at least the same number of correlation
measurements as there are pixels in the reconstructed image. Therefore
single-pixel imaging systems typically exhibit low frame-rates. To mitigate
this, a range of compressive sensing techniques have been developed which rely
on a priori knowledge of the scene to reconstruct images from an under-sampled
set of measurements. In this work we take a different approach and adopt a
strategy inspired by the foveated vision systems found in the animal kingdom -
a framework that exploits the spatio-temporal redundancy present in many
dynamic scenes. In our single-pixel imaging system a high-resolution foveal
region follows motion within the scene, but unlike a simple zoom, every frame
delivers new spatial information from across the entire field-of-view. Using
this approach we demonstrate a four-fold reduction in the time taken to record
the detail of rapidly evolving features, whilst simultaneously accumulating
detail of more slowly evolving regions over several consecutive frames. This
tiered super-sampling technique enables the reconstruction of video streams in
which both the resolution and the effective exposure-time spatially vary and
adapt dynamically in response to the evolution of the scene. The methods
described here can complement existing compressive sensing approaches and may
be applied to enhance a variety of computational imagers that rely on
sequential correlation measurements.
Comment: 13 pages, 5 figures
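As a minimal illustration of the correlation-measurement principle described in this abstract (not the authors' foveated pipeline), the sketch below fully samples a small scene with an orthogonal Hadamard pattern set; the scene, pattern choice, and image size are assumptions for demonstration only:

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative sketch: fully sampled single-pixel imaging with an
# orthogonal set of +/-1 Hadamard patterns, one correlation measurement
# per pattern. All sizes and the "scene" are assumptions.
n = 8                                   # reconstructed image is n x n pixels
H = hadamard(n * n)                     # one pattern per row (n*n patterns)
scene = np.random.rand(n * n)           # flattened scene seen by the detector

# Each single-pixel measurement is the correlation (inner product)
# between the scene and one displayed pattern.
measurements = H @ scene

# With an orthogonal pattern set, n*n measurements recover n*n pixels
# exactly via the inverse transform -- hence the low frame rates that
# motivate under-sampling strategies such as the one in this paper.
reconstruction = H.T @ measurements / (n * n)
assert np.allclose(reconstruction, scene)
```

This makes concrete why fully sampling requires as many measurements as pixels, which is the bottleneck the foveated approach addresses.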
Multigrid preconditioners for the hybridised discontinuous Galerkin discretisation of the shallow water equations
17 USC 105 interim-entered record; under review. The article of record as published may be found at https://doi.org/10.1016/j.jcp.2020.109948
Numerical climate- and weather-prediction models require the fast solution of the
equations of fluid dynamics. Discontinuous Galerkin (DG) discretisations have several
advantageous properties. They can be used for arbitrary domains and support a structured
data layout, which is particularly important on modern chip architectures. For smooth
solutions, higher order approximations can be particularly efficient since errors decrease
exponentially in the polynomial degree. Due to the wide separation of timescales in
atmospheric dynamics, semi-implicit time integrators are highly efficient, since the implicit
treatment of fast waves avoids tight constraints on the time step size, and can therefore
improve overall efficiency. However, if implicit-explicit (IMEX) integrators are used, a large
linear system of equations has to be solved in every time step. A particular problem for DG
discretisations of velocity-pressure systems is that the normal Schur-complement reduction
to an elliptic system for the pressure is not possible since the numerical fluxes introduce
artificial diffusion terms. For the shallow water equations, which form an important model
system, hybridised DG methods have been shown to overcome this issue. However, no
attention has been paid to the efficient solution of the resulting linear system of equations.
In this paper we address this issue and show that the elliptic system for the flux unknowns
can be solved efficiently by using a non-nested multigrid algorithm. The method is
implemented in the Firedrake library and we demonstrate the excellent performance of
the algorithm both for an idealised stationary flow problem in a flat domain and for non-stationary setups in spherical geometry from the well-known test suite in Williamson et
al. (1992) [23]. In the latter case the performance of our bespoke multigrid preconditioner
(although itself not highly optimised) is comparable to that of a highly optimised direct
solver.
Funding: EPSRC EP/L015684/1; UK-Fluids network (EPSRC grant EP/N032861/1).
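As a generic illustration of the multigrid idea invoked in this abstract, the following is a textbook geometric two-grid cycle for a 1D Poisson problem; it is a sketch of the general technique, not the paper's non-nested multigrid for the hybridised DG flux system:

```python
import numpy as np

# Textbook two-grid cycle for -u'' = f on [0, 1] with zero boundary values.
# Generic geometric multigrid sketch; all choices (smoother, transfer
# operators, grid sizes) are standard textbook assumptions.

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    # Damped Jacobi smoothing for the standard 3-point stencil.
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(rc, hc):
    # Exact solve of the coarse-grid correction equation (tridiagonal system).
    m = rc.size - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = jacobi(u, f, h)                                   # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros((r.size + 1) // 2)                      # restrict: full weighting
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    ec = coarse_solve(rc, 2 * h)
    e = np.zeros_like(u)
    e[::2] = ec                                           # prolong: copy coarse points
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])                # linear interpolation
    u += e
    return jacobi(u, f, h)                                # post-smooth

n = 65                                  # fine grid points (including boundaries)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)      # exact solution is sin(pi * x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)                              # down at the discretisation-error level
```

Replacing the exact coarse solve with a recursive call to the same cycle gives the usual V-cycle; the paper's contribution is doing this on non-nested meshes for the hybridised flux unknowns.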
i-RheoFT: Fourier transforming sampled functions without artefacts
In this article we present a new open-access code named 'i-RheoFT' that implements the analytical method first introduced in [PRE, 80, 012501 (2009)] and then enhanced in [New J Phys 14, 115032 (2012)], which allows one to evaluate the Fourier transform of any generic time-dependent function that vanishes for negative times, sampled at a finite set of data points that extend over a finite range and need not be equally spaced. i-RheoFT has been employed here to investigate three important experimental factors: (i) the 'density of initial experimental points' describing the sampled function, (ii) the interpolation function used to perform the 'virtual oversampling' procedure introduced in [New J Phys 14, 115032 (2012)], and (iii) the detrimental effect of noise on the expected outcomes. We demonstrate that, at relatively high signal-to-noise ratios and densities of initial experimental points, all three built-in MATLAB interpolation functions employed in this work (i.e., Spline, Makima and PCHIP) perform well in recovering the information embedded within the original sampled function, with the Spline function performing best. However, when either the number of initial data points or the signal-to-noise ratio is reduced, there exists a threshold below which all three functions perform poorly, with the worst performance given by the Spline function in both cases, and the least poor by the PCHIP function at low densities of initial data points and by the Makima function at relatively low signal-to-noise ratios. We envisage that i-RheoFT will be of particular interest and use to all those studies where sampled or time-averaged functions, often defined by a discrete set of data points within a finite time window, are exploited to gain new insights into the systems' dynamics.
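The virtual-oversampling idea can be sketched generically as follows; this is an illustrative Python analogue, not the i-RheoFT MATLAB implementation, and the test signal g(t) = exp(-t) is an assumption chosen because its one-sided transform, 1/(1 + i*omega), is known in closed form:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.integrate import trapezoid

# Illustrative sketch: interpolate an unevenly sampled, causal signal onto
# a dense grid ("virtual oversampling"), then evaluate its Fourier
# transform numerically. Signal choice and grid sizes are assumptions.
rng = np.random.default_rng(0)
t = np.concatenate(([0.0], np.sort(rng.uniform(0.0, 10.0, 200))))
g = np.exp(-t)                               # sampled relaxation-like signal

dense_t = np.linspace(t[0], t[-1], 4000)     # virtual oversampling grid
g_dense = PchipInterpolator(t, g)(dense_t)   # PCHIP, one of the three compared

omega = 1.0
# One-sided transform G(omega) = integral of g(t) exp(-i omega t) dt,
# approximated by the trapezoidal rule on the oversampled grid.
G = trapezoid(g_dense * np.exp(-1j * omega * dense_t), dense_t)

exact = 1.0 / (1.0 + 1j * omega)
print(abs(G - exact))                        # small interpolation/truncation error
```

Swapping the interpolator (e.g. for a cubic spline or Akima variant) reproduces the kind of comparison between interpolation functions discussed in the abstract.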
Enhanced Optical Trapping
Optical tweezers have contributed substantially to the advancement of micro-manipulation. However, they do have restrictions, mainly the limited range of materials amenable to optical trapping. Here we propose a method of employing optically trapped objects to manipulate the surrounding fluid and thus particles freely diffusing within it. We create and investigate a reconfigurable active-feedback system of optically trapped actuators, capable of manipulating the translational and rotational motion of one or more nearby free objects.
Uric acid levels and outcome from coronary artery bypass grafting
Objective: Elevated uric acid levels have been associated with an adverse cardiovascular outcome in several settings. Their utility in patients undergoing surgical revascularization has not, however, been assessed. We hypothesized that serum uric acid levels would predict the outcome of patients undergoing coronary artery bypass grafting. Methods: The study cohort consisted of 1140 consecutive patients undergoing nonemergency coronary artery bypass grafting. Clinical details were obtained prospectively, and serum uric acid was measured a median of 1 day before surgery. The primary end point was all-cause mortality. Results: During a median of 4.5 years, 126 patients (11%) died. Mean (± standard deviation) uric acid levels were 390 ± 131 ÎŒmol/L in patients who died versus 353 ± 86 ÎŒmol/L among survivors (hazard ratio 1.48 per 100 ÎŒmol/L; 95% confidence interval, 1.25-1.74; P < .001). The excess risk associated with an elevated uric acid level was particularly evident among patients in the upper quartile (≥410 ÎŒmol/L; hazard ratio vs all other quartiles combined 2.18; 95% confidence interval, 1.53-3.11; P < .001). After adjusting for other potential prognostic variables, including the European System for Cardiac Operative Risk Evaluation, uric acid remained predictive of outcome. Conclusion: Increasing levels of uric acid are associated with poorer survival after coronary artery bypass grafting. Their prognostic utility is independent of other recognized risk factors, including the European System for Cardiac Operative Risk Evaluation.
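The per-unit hazard ratio quoted in the abstract scales multiplicatively under the standard log-linear proportional-hazards interpretation; a short arithmetic sketch (the 150 ÎŒmol/L difference is a hypothetical example, not a value from the study):

```python
# Arithmetic sketch of how the quoted per-unit hazard ratio scales.
# The 1.48 figure is from the abstract; the 150 umol/L difference between
# two patients is a hypothetical illustration.
hr_per_100 = 1.48                      # hazard ratio per 100 umol/L uric acid
delta = 150.0                          # difference between two patients, umol/L

relative_hazard = hr_per_100 ** (delta / 100.0)
print(round(relative_hazard, 2))       # about 1.80
```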
Calibration of the distance scale from galactic Cepheids: I. Calibration based on the GFG sample
New estimates of the distances of 36 nearby galaxies are presented based on
accurate distances of galactic Cepheids obtained by Gieren, Fouque and Gomez
(1998) from the geometrical Barnes-Evans method.
The concept of 'sosie' is applied to extend the distance determination to
extragalactic Cepheids without assuming the linearity of the PL relation, so
that the distance moduli are obtained in a straightforward way.
The correction for extinction is made using two photometric bands (V and I)
according to the principles introduced by Freedman and Madore (1990). Finally,
the statistical bias due to the incompleteness of the sample is corrected
according to the precepts introduced by Teerikorpi (1987) without introducing
any free parameters (except the distance modulus itself in an iterative
scheme).
The final distance moduli depend on the adopted extinction ratio {R_V}/{R_I}
and on the limiting apparent magnitude of the sample. A comparison with the
distance moduli recently published by the Hubble Space Telescope Key Project
(HSTKP) team reveals fair agreement when the same ratio {R_V}/{R_I} is used,
but shows a small discrepancy at large distances.
To bypass the uncertainty due to the metallicity effect, it is suggested that
only galaxies having nearly the same metallicity as the calibrating Cepheids
(i.e. solar metallicity) be considered. The internal uncertainty of the
distances is about 0.1 magnitude but the total uncertainty may reach 0.3
magnitude.Comment: 12 pages, 4 figures, access to a database of extragalactic Cepheids.
Astronomy & Astrophysics (in press) 200
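The two-band extinction correction mentioned in the abstract can be sketched as follows; the value of R and the apparent moduli are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of a two-band (V, I) extinction correction: with apparent
# distance moduli mu_V and mu_I, a true modulus follows
# mu_0 = mu_V - R * (mu_V - mu_I), where R = A_V / E(V-I) is the adopted
# extinction ratio. R = 2.45 and the moduli below are illustrative
# assumptions only.
R = 2.45
mu_V, mu_I = 24.50, 24.30

mu_0 = mu_V - R * (mu_V - mu_I)                # dereddened distance modulus
distance_mpc = 10 ** ((mu_0 + 5) / 5) / 1e6    # modulus -> distance in Mpc
print(round(mu_0, 2), round(distance_mpc, 2))  # 24.01 0.63
```

This shows why the final moduli depend directly on the adopted extinction ratio: any change in R shifts mu_0, and hence the distance, systematically.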
The HST Key Project on the Extragalactic Distance Scale XXV. A Recalibration of Cepheid Distances to Type Ia Supernovae and the Value of the Hubble Constant
Cepheid-based distances to seven Type Ia supernovae (SNe)-host galaxies have
been derived using the standard HST Key Project on the Extragalactic Distance
Scale pipeline. For the first time, this allows for a transparent comparison of
data accumulated as part of three different HST projects, the Key Project, the
Sandage et al. Type Ia SNe program, and the Tanvir et al. Leo I Group study.
Re-analyzing the Tanvir et al. galaxy and six Sandage et al. galaxies we find a
mean (weighted) offset in true distance moduli of 0.12+/-0.07 mag -- i.e., 6%
in linear distance -- in the sense of reducing the distance scale, or
increasing H0. Adopting the reddening-corrected Hubble relations of Suntzeff et
al. (1999), tied to a zero point based upon SNe~1990N, 1981B, 1998bu, 1989B,
1972E and 1960F and the photometric calibration of Hill et al. (1998), leads to
a Hubble constant of H0=68+/-2(random)+/-5(systematic) km/s/Mpc. Adopting the
Kennicutt et al. (1998) Cepheid period-luminosity-metallicity dependency
decreases the inferred H0 by 4%. The H0 result from Type Ia SNe is now in good
agreement, to within their respective uncertainties, with that from the
Tully-Fisher and surface brightness fluctuation relations.Comment: Accepted for publication in The Astrophysical Journal. 62 pages,
LaTeX, 9 Postscript figures. Also available at
http://casa.colorado.edu/~bgibson/publications.htm
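The quoted conversion of a distance-modulus offset into a linear-distance change is a one-line calculation worth making explicit:

```python
# Arithmetic check of the conversion quoted in the abstract: an offset of
# 0.12 mag in distance modulus corresponds to a factor 10**(0.12 / 5) in
# linear distance, i.e. roughly the quoted 6%.
offset_mag = 0.12
factor = 10 ** (offset_mag / 5)
print(round((factor - 1) * 100, 1))    # 5.7 (percent)
```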
The HST Key Project on the Extragalactic Distance Scale XXVI. The Calibration of Population II Secondary Distance Indicators and the Value of the Hubble Constant
A Cepheid-based calibration is derived for four distance indicators that
utilize stars in the old stellar populations: the tip of the red giant branch
(TRGB), the planetary nebula luminosity function (PNLF), the globular cluster
luminosity function (GCLF) and the surface brightness fluctuation method (SBF).
The calibration is largely based on the Cepheid distances to 18 spiral galaxies
within cz =1500 km/s obtained as part of the HST Key Project on the
Extragalactic Distance Scale, but relies also on Cepheid distances from
separate HST and ground-based efforts. The newly derived calibration of the SBF
method is applied to obtain distances to four Abell clusters in the velocity
range between 3800 and 5000 km/s, observed by Lauer et al. (1998) using the
HST/WFPC2. Combined with cluster velocities corrected for a cosmological flow
model, these distances imply a value of the Hubble constant of H0 = 69 +/- 4
(random) +/- 6 (systematic) km/s/Mpc. This result assumes that the Cepheid PL
relation is independent of the metallicity of the variable stars; adopting a
metallicity correction as in Kennicutt et al. (1998), would produce a (5 +/-
3)% decrease in H0. Finally, the newly derived calibration allows us to
investigate systematics in the Cepheid, PNLF, SBF, GCLF and TRGB distance
scales.Comment: Accepted for publication in the Astrophysical Journal. 48 pages
(including 13 figures and 4 tables), plus two additional tables in landscape
format. Also available at http://astro.caltech.edu/~lff/pub.htm
K' SBF magnitudes have been updated.
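The Hubble-constant estimate behind results like this reduces to H0 = v / d for each cluster; a hedged sketch with hypothetical inputs (the velocity and distance below are illustrative values within the quoted 3800-5000 km/s range, not the paper's measurements):

```python
# Hedged sketch of the H0 estimate H0 = v / d for one cluster, with the
# quoted ~5% metallicity-correction decrease applied. The velocity and
# SBF distance are hypothetical illustrative values.
v = 4500.0     # recession velocity, km/s (hypothetical)
d = 65.0       # SBF distance, Mpc (hypothetical)

h0 = v / d
h0_corrected = h0 * (1 - 0.05)                # applying the quoted ~5% decrease
print(round(h0, 1), round(h0_corrected, 1))   # 69.2 65.8
```

Averaging such per-cluster estimates, with flow-corrected velocities, is what yields the quoted H0 = 69 +/- 4 (random) +/- 6 (systematic) km/s/Mpc.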